A rapidly convergent descent method for minimization

Authors

  • R. Fletcher
  • M. J. D. Powell
Abstract

We are concerned in this paper with the general problem of finding an unrestricted local minimum of a function f(x_1, x_2, ..., x_n) of several variables x_1, x_2, ..., x_n. We suppose that the function of interest can be calculated at all points. It is convenient to group functions into two main classes according to whether the gradient vector g_i = ∂f/∂x_i is defined analytically at each point or must be estimated from differences of values of f. The method described in this paper is applicable to the case of a defined gradient. For the other case a useful method and general discussion are given by Rosenbrock (1960). Methods using the gradient include the classical method of steepest descents (Courant, 1943; Curry, 1944; and Householder, 1953), Levenberg's modification of damped steepest descents (1944), a somewhat similar variation due to Booth (1957), the conjugate gradient method of Hestenes and Stiefel (1952), similar methods of Martin and Tee (1961), the "Partan" method of Shah, Buehler and Kempthorne (1961), and a method due to Powell (1962). In this paper we describe a powerful method with rapid convergence which is based upon a procedure described by Davidon (1959). Davidon's work has been little publicized, but in our opinion constitutes a considerable advance over current alternatives. We have made both a simplification, by which certain orthogonality conditions important to the rate of attaining the solution are preserved, and an improvement in the criterion of convergence.
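The Davidon procedure the abstract refers to is now known as the Davidon-Fletcher-Powell (DFP) quasi-Newton method: an estimate H of the inverse Hessian is maintained and improved from successive gradient differences. The following is a minimal sketch in two variables using plain Python lists; the quadratic test function and the exact line search below are illustrative assumptions, not the paper's own convergence criterion or step-length procedure.

```python
# Illustrative sketch of the DFP (Davidon-Fletcher-Powell) iteration in
# two variables. Test function (an assumption for illustration):
#   f(x) = x1^2 + 2*x2^2,  gradient (2*x1, 4*x2),  Hessian A = diag(2, 4).

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def grad(x):
    # Gradient of the illustrative quadratic f(x) = x1^2 + 2*x2^2.
    return [2.0 * x[0], 4.0 * x[1]]

def dfp_minimize(x, iters=10):
    H = [[1.0, 0.0], [0.0, 1.0]]          # initial inverse-Hessian estimate
    g = grad(x)
    for _ in range(iters):
        d = [-c for c in mat_vec(H, g)]   # search direction d = -H g
        Ad = [2.0 * d[0], 4.0 * d[1]]     # A d, exact for this quadratic
        dAd = dot(d, Ad)
        if abs(dAd) < 1e-15:
            break
        alpha = -dot(g, d) / dAd          # exact line search on the quadratic
        x_new = [x[0] + alpha * d[0], x[1] + alpha * d[1]]
        g_new = grad(x_new)
        s = [x_new[0] - x[0], x_new[1] - x[1]]   # step
        y = [g_new[0] - g[0], g_new[1] - g[1]]   # gradient change
        sy = dot(s, y)
        Hy = mat_vec(H, y)
        yHy = dot(y, Hy)
        if sy > 1e-15 and yHy > 1e-15:
            # DFP update: H += s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
            for i in range(2):
                for j in range(2):
                    H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
        if dot(g, g) < 1e-20:
            break
    return x

x_min = dfp_minimize([3.0, 1.0])
print(x_min)  # converges to the minimum at the origin
```

With an exact line search on a quadratic, iterations of this kind reach the minimum in at most n steps, which is the rapid convergence the paper's orthogonality conditions are designed to preserve.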


Similar articles

Hybrid steepest-descent method with sequential and functional errors in Banach space

Let $X$ be a reflexive Banach space, $T:X\to X$ be a nonexpansive mapping with $C=\mathrm{Fix}(T)\neq\emptyset$, and $F:X\to X$ be $\delta$-strongly accretive and $\lambda$-strictly pseudocontractive with $\delta+\lambda>1$. In this paper, we present modified hybrid steepest-descent methods, involving sequential errors and functional errors with functions admitting a center, which generate convergent sequences ...


Constrained Nonlinear Optimal Control via a Hybrid BA-SD

The non-convex behavior presented by nonlinear systems limits the application of classical optimization techniques to solve optimal control problems for these kinds of systems. This paper proposes a hybrid algorithm, namely BA-SD, by combining Bee algorithm (BA) with steepest descent (SD) method for numerically solving nonlinear optimal control (NOC) problems. The proposed algorithm includes th...


A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(rκn log n) random measurements of a positive semidefinite n×n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.


Stability of Convergent Continuous Descent Methods

We consider continuous descent methods for the minimization of convex functions defined on a general Banach space. In our previous work we showed that most of them (in the sense of Baire category) converged. In the present paper we show that convergent continuous descent methods are stable under small perturbations.


Block BFGS Methods

We introduce a quasi-Newton method with block updates called Block BFGS. We show that this method, performed with inexact Armijo-Wolfe line searches, converges globally and superlinearly under the same convexity assumptions as BFGS. We also show that Block BFGS is globally convergent to a stationary point when applied to non-convex functions with bounded Hessian, and discuss other modifications...


Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure

Stochastic optimization algorithms with variance reduction have proven successful for minimizing large finite sums of functions. However, in the context of empirical risk minimization, it is often helpful to augment the training set by considering random perturbations of input examples. In this case, the objective is no longer a finite sum, and the main candidate for optimization is the stochas...



Publication date: 2008